51 research outputs found

    GPU data structures for graphics and vision

    Get PDF
Graphics hardware has in recent years become increasingly programmable, and its programming APIs use the stream processor model to expose massive parallelism to the programmer. Unfortunately, the inherent restrictions of the stream processor model, which the GPU relies on to maintain high performance, often pose a problem when porting CPU algorithms for video and volume processing to graphics hardware. Serial data dependencies that accelerate CPU processing are counterproductive for the data-parallel GPU. This thesis demonstrates new ways of tackling well-known problems of large-scale video/volume analysis. In some instances, we enable processing on the restricted hardware model by re-introducing algorithms from early computer graphics research. On other occasions, we use newly discovered, hierarchical data structures to circumvent the random-access-read/fixed-write restriction that had previously kept sophisticated analysis algorithms from running solely on graphics hardware. For 3D processing, we apply known game graphics concepts such as mip-maps, projective texturing, and dependent texture lookups to show how video/volume processing can benefit algorithmically from being implemented in a graphics API. The novel GPU data structures provide drastically increased processing speed and lift processing-heavy operations to real-time performance levels, paving the way for new and interactive vision/graphics applications.
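The mip-map pyramids mentioned in the abstract reduce an image level by level, each coarser texel averaging its four finer parents. A minimal CPU sketch in NumPy (illustrative only; the thesis implements such reductions on the GPU through the graphics API, and the function name here is hypothetical):

```python
import numpy as np

def build_mipmaps(img):
    """Build a mip-map chain: each level halves the resolution and
    every texel averages its 2x2 block of children in the finer level.
    Assumes a square, power-of-two-sized input image."""
    levels = [img.astype(np.float64)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append((a[0::2, 0::2] + a[0::2, 1::2] +
                       a[1::2, 0::2] + a[1::2, 1::2]) / 4.0)
    return levels

img = np.arange(16, dtype=np.float64).reshape(4, 4)
chain = build_mipmaps(img)
# The 1x1 apex holds the mean of the whole image (7.5 for 0..15).
```

On the GPU, each level is a render pass sampling the previous level; the averaging above is exactly what a bilinear fetch at the right coordinates performs in hardware.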

    Biochip-Based Detection of KRAS Mutation in Non-Small Cell Lung Cancer

    Get PDF
This study is aimed at evaluating the potential of a biochip assay to sensitively detect KRAS mutations in DNA from non-small cell lung cancer (NSCLC) tissue samples. The assay covers 10 mutations in codons 12 and 13 of the KRAS gene and is based on mutant-enriched PCR followed by reverse hybridization of biotinylated amplification products to an array of sequence-specific probes immobilized on the tip of a rectangular plastic stick (biochip). Biochip hybridization identified 17 (21%) samples as carrying a KRAS mutation, of which 16 (33%) were adenocarcinomas and 1 (3%) was a squamous cell carcinoma. All mutations were confirmed by DNA sequencing. Using 10 ng of starting DNA, the biochip assay demonstrated a detection limit of 1% mutant sequence in a background of wild-type DNA. Our results suggest that the biochip assay is a sensitive alternative to protocols currently in use for KRAS mutation testing on limited-quantity samples.

    Dissecting CD8+ T cell pathology of severe SARS-CoV-2 infection by single-cell immunoprofiling

    Get PDF
Introduction: SARS-CoV-2 infection results in varying disease severity, ranging from asymptomatic infection to severe illness. A detailed understanding of the immune response to SARS-CoV-2 is critical to unravel the causative factors underlying differences in disease severity and to develop optimal vaccines against new SARS-CoV-2 variants. Methods: We combined single-cell RNA and T cell receptor sequencing with CITE-seq antibodies to characterize the CD8+ T cell response to SARS-CoV-2 infection at high resolution and compared responses between mild and severe COVID-19. Results: We observed increased CD8+ T cell exhaustion in severe SARS-CoV-2 infection and identified a population of NK-like, terminally differentiated CD8+ effector T cells characterized by expression of FCGR3A (encoding CD16). Further characterization of NK-like CD8+ T cells revealed heterogeneity among CD16+ NK-like CD8+ T cells and profound differences in cytotoxicity, exhaustion, and NK-like differentiation between mild and severe disease conditions. Discussion: We propose a model in which differences in the surrounding inflammatory milieu lead to crucial differences in NK-like differentiation of CD8+ effector T cells, ultimately resulting in the appearance of NK-like CD8+ T cell populations of different functionality and pathogenicity. Our in-depth characterization of the CD8+ T cell-mediated response to SARS-CoV-2 infection provides a basis for further investigation of the importance of NK-like CD8+ T cells in COVID-19 severity.

    MPEG Z/Alpha och högupplösande MPEG-video (MPEG Z/Alpha and high-resolution MPEG video)

    No full text
    Technical progress has yielded practicable camera systems for the acquisition of so-called depth maps, images with depth information. Images and movies with depth information open the door for new types of applications in the area of computer graphics and vision, which implies that they will need to be processed in ever-increasing volumes. Increased depth-image processing creates demand for a standardized data format for the exchange of image data with depth information, both still and animated. Software to convert acquired depth data to such video formats is highly necessary. This diploma thesis sheds light on many of the issues that come with this new set of tasks, spanning from data acquisition through readily available software for data encoding to possible future applications. Further, a software architecture fulfilling all of the mentioned demands is presented. The encoder comprises a collection of UNIX programs that generate MPEG Z/Alpha, an MPEG2-based video format. Besides MPEG2's standard data streams, MPEG Z/Alpha contains one extra data stream to store image depth information (and transparency). The decoder suite, called TexMPEG, is a C library for the in-memory decompression of MPEG Z/Alpha. Much effort has been put into video decoder parallelization, and TexMPEG is now capable of decoding multiple video streams, not only internally in parallel, but also with inherent frame synchronization between MPEG videos decoded in parallel.

    GPU-Datenstrukturen für Computergraphik und Bildverarbeitung (GPU data structures for computer graphics and image processing)

    No full text
    Graphics hardware has in recent years become increasingly programmable, and its programming APIs use the stream processor model to expose massive parallelism to the programmer. Unfortunately, the inherent restrictions of the stream processor model, which the GPU relies on to maintain high performance, often pose a problem when porting CPU algorithms for video and volume processing to graphics hardware. Serial data dependencies that accelerate CPU processing are counterproductive for the data-parallel GPU. This thesis demonstrates new ways of tackling well-known problems of large-scale video/volume analysis. In some instances, we enable processing on the restricted hardware model by re-introducing algorithms from early computer graphics research. On other occasions, we use newly discovered, hierarchical data structures to circumvent the random-access-read/fixed-write restriction that had previously kept sophisticated analysis algorithms from running solely on graphics hardware. For 3D processing, we apply known game graphics concepts such as mip-maps, projective texturing, and dependent texture lookups to show how video/volume processing can benefit algorithmically from being implemented in a graphics API. The novel GPU data structures provide drastically increased processing speed and lift processing-heavy operations to real-time performance levels, paving the way for new and interactive vision/graphics applications.

    Authors' Addresses

    No full text
    We present an implementation approach to high-speed Marching Cubes, running entirely on the Graphics Processing Unit of Shader Model 3.0 and 4.0 graphics hardware. Our approach is based on the interpretation of Marching Cubes as a stream compaction and expansion process, and is implemented using the HistoPyramid, a hierarchical data structure previously only used in GPU data compaction. We extend the HistoPyramid structure to allow for stream expansion, which provides an efficient method for generating geometry directly on the GPU, even on Shader Model 3.0 hardware. Currently, our algorithm outperforms all other known GPU-based iso-surface extraction algorithms. We describe our implementation and present a performance analysis on several generations of graphics hardware
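The stream-compaction role of the HistoPyramid can be illustrated off-GPU: per-cell counts are summed level by level into a pyramid, the apex yields the total output size, and each output element is located by descending the pyramid while subtracting child counts. A hypothetical NumPy sketch (function names are illustrative, not from the paper, and the stream-expansion step used for Marching Cubes is omitted):

```python
import numpy as np

def build_histopyramid(mask):
    """mask: 2^n x 2^n array of 0/1 element counts. Returns the pyramid
    levels, finest first; each coarser cell sums its four children."""
    levels = [mask.astype(np.int64)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append(a[0::2, 0::2] + a[0::2, 1::2] +
                      a[1::2, 0::2] + a[1::2, 1::2])
    return levels

def traverse(levels, key):
    """Locate the (row, col) of the key-th selected element (0-based)
    by descending from the apex and subtracting child counts."""
    r = c = 0
    for lvl in reversed(levels[:-1]):     # from coarsest below apex to finest
        r, c = 2 * r, 2 * c
        for dr, dc in ((0, 0), (0, 1), (1, 0), (1, 1)):
            n = lvl[r + dr, c + dc]
            if key < n:                   # element lies in this child
                r, c = r + dr, c + dc
                break
            key -= n                      # skip this child's elements
    return r, c

mask = np.array([[0, 1, 0, 0],
                 [0, 0, 1, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 1]])
levels = build_histopyramid(mask)
total = int(levels[-1][0, 0])             # number of selected elements
points = [traverse(levels, k) for k in range(total)]
```

On the GPU, one such traversal runs per output element in parallel; no prefix-sum scatter is needed, which is what makes the scheme fit the fixed-write restriction described above.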

    RealTime QuadTree Analysis using HistoPyramids

    No full text
    Region quadtrees are convenient tools for hierarchical image analysis. Like the related Haar wavelets, they are simple to generate within a fixed calculation time. The clustering at each resolution level requires only local data, yet delivers intuitive classification results. Although region quadtree partitioning is very rigid, it can be computed rapidly from arbitrary imagery. This research article demonstrates how graphics hardware can be utilized to build region quadtrees at unprecedented speeds. To achieve this, a data structure called the HistoPyramid registers the number of desired image features in a pyramidal 2D array. This HistoPyramid is then used as an implicit indexing data structure during quadtree traversal, creating lists of the registered image features directly in GPU memory and virtually eliminating bus transfers between CPU and GPU. With this novel concept, quadtrees can be applied in real-time video processing on standard PC hardware. A multitude of applications in image and video processing arises, since region quadtree analysis becomes a lightweight preprocessing step for feature clustering in vision tasks, motion vector analysis, PDE calculations, or data compression. As a side note, we outline how this algorithm can be applied to 3D volume data, effectively generating region octrees purely on graphics hardware.
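The region quadtree partitioning itself is simple to state: a square block becomes a leaf when it is homogeneous, and otherwise splits into four equal quadrants. A hypothetical recursive CPU sketch (not the GPU HistoPyramid implementation from the article; names are illustrative):

```python
import numpy as np

def region_quadtree(img, r=0, c=0, size=None):
    """Build a region quadtree of a square, power-of-two-sized image.
    Leaves are ('leaf', value, size); inner nodes are ('node', [children])
    with children ordered top-left, top-right, bottom-left, bottom-right."""
    if size is None:
        size = img.shape[0]
    block = img[r:r + size, c:c + size]
    if size == 1 or block.min() == block.max():
        return ('leaf', int(block.flat[0]), size)
    h = size // 2
    return ('node', [region_quadtree(img, r + dr, c + dc, h)
                     for dr in (0, h) for dc in (0, h)])

img = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 1, 0]])
tree = region_quadtree(img)
# Three quadrants collapse to 2x2 leaves; only the bottom-right splits further.
```

The homogeneity test uses only the block's own pixels, which is the "only local data" property the abstract highlights and what makes the construction map onto a per-level GPU reduction.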

    Model-based free-viewpoint video acquisition, rendering and encoding

    No full text
    In recent years, the convergence of computer vision and computer graphics has put forth free-viewpoint video as a new field of research. The goal is to advance traditional 2D video into an immersive medium that enables the viewer to interactively choose an arbitrary viewpoint in 3D space onto a scene while it plays back. In this paper, we give an overview of a system for reconstructing, rendering, and encoding free-viewpoint videos of human actors. It employs a hardware-accelerated, marker-free optical motion capture algorithm operating on multi-view video streams and an a-priori body model to reconstruct the shape and motion of a moving actor. Real-time, high-quality rendering of the moving person from arbitrary perspectives is achieved by applying a multi-view texturing approach to the video frames. We also present a predictive encoding scheme as well as a 4D-SPIHT wavelet compression mechanism, both of which exploit the 3D scene geometry for efficient encoding of the multi-view texture images.